IEOR 4106: Continuous-Time Markov Chains
Author: Sigman
Abstract
A Markov chain in discrete time, {Xn : n ≥ 0}, remains in any state for exactly one unit of time before making a transition (change of state). We proceed now to relax this restriction by allowing a chain to spend a continuous amount of time in any state, but in such a way as to retain the Markov property. As motivation, suppose we consider the rat in the open maze. Clearly it is more realistic to be able to keep track of where the rat is at any continuous time t ≥ 0 as opposed to only where the rat is after n “steps”. Assume throughout that our state space is S = {..., −2, −1, 0, 1, 2, ...} (or some subset thereof). Suppose now that whenever a chain enters state i ∈ S, independent of the past, the length of time spent in state i is a continuous, strictly positive (and proper) random variable Hi called the holding time in state i. When the holding time ends, the process then makes a transition into state j according to transition probability Pij, independent of the past, and so on. Letting X(t) denote the state at time t, we end up with a continuous-time stochastic process {X(t) : t ≥ 0} with state space S. Our objective is to place conditions on the holding times to ensure that the continuous-time process satisfies the Markov property: the future, {X(s + t) : t ≥ 0}, given the present state, X(s), is independent of the past, {X(u) : 0 ≤ u < s}. Such a process will be called a continuous-time Markov chain (CTMC), and as we will conclude shortly, the holding times will have to be exponentially distributed. The formal definition is given by
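The construction described above, an exponential holding time in each state followed by a jump governed by transition probabilities Pij, can be sketched as a short simulation. The two-state chain and its rates below are hypothetical illustrations chosen for the example, not taken from the notes.

```python
import random

def simulate_ctmc(rates, P, x0, t_end, rng=random.Random(0)):
    """Simulate a CTMC: exponential holding time with rate rates[i]
    in state i, then a jump to state j with probability P[i][j].
    Returns the jump times and the states visited up to time t_end."""
    t, state = 0.0, x0
    times, states = [0.0], [x0]
    while True:
        t += rng.expovariate(rates[state])  # exponential holding time H_i
        if t >= t_end:
            break
        # pick the next state j with probability P[state][j]
        u, cum = rng.random(), 0.0
        for j, p in enumerate(P[state]):
            cum += p
            if u < cum:
                state = j
                break
        times.append(t)
        states.append(state)
    return times, states

# Hypothetical two-state chain: leave state 0 at rate 1.0, state 1 at
# rate 2.0, always jumping to the other state (P is 0/1 off-diagonal).
times, states = simulate_ctmc([1.0, 2.0], [[0, 1], [1, 0]], 0, 10.0)
print(states[:5])
```

Because the transition matrix here forces a jump to the other state, the simulated path simply alternates between 0 and 1, spending on average twice as long in state 0 as in state 1.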
Similar Resources
IEOR 4106: Notes on Brownian Motion
We present an introduction to Brownian motion, an important continuous-time stochastic process that serves as a continuous-time analog to the simple symmetric random walk on the one hand, and shares fundamental properties with the Poisson counting process on the other hand. Throughout, we use the following notation for the real numbers, the non-negative real numbers, the integers, and the non-n...
IEOR 4701: Continuous-Time Markov Chains
A Markov chain in discrete time, {Xn : n ≥ 0}, remains in any state for exactly one unit of time before making a transition (change of state). We proceed now to relax this restriction by allowing a chain to spend a continuous amount of time in any state, but in such a way as to retain the Markov property. As motivation, suppose we consider the rat in the open maze. Clearly it is more realistic ...
IEOR 6711: Continuous-Time Markov Chains
A Markov chain in discrete time, {Xn : n ≥ 0}, remains in any state for exactly one unit of time before making a transition (change of state). We proceed now to relax this restriction by allowing a chain to spend a continuous amount of time in any state, but in such a way as to retain the Markov property. As motivation, suppose we consider the rat in the open maze. Clearly it is more realistic ...
Sigman, IEOR 4701: Notes on Brownian Motion
We present an introduction to Brownian motion, an important continuous-time stochastic process that serves as a continuous-time analog to the simple symmetric random walk on the one hand, and shares fundamental properties with the Poisson counting process on the other hand. Throughout, we use the following notation for the real numbers, the non-negative real numbers, the integers, and the non-n...
Sigman: Limiting distribution for a Markov chain
In these Lecture Notes, we shall study the limiting behavior of Markov chains as time n → ∞. In particular, under suitable easy-to-check conditions, we will see that a Markov chain possesses a limiting probability distribution, π = (πj)j∈S, and that the chain, if started off initially with such a distribution, will be a stationary stochastic process. We will also see that we can find π by merely ...